Enhancing Quality of Pose-varied Face Restoration with Local Weak Feature Sensing and GAN Prior (2205.14377v3)
Abstract: Facial semantic guidance (including facial landmarks, facial heatmaps, and facial parsing maps) and facial generative adversarial network (GAN) priors have been widely used in blind face restoration (BFR) in recent years. Although existing BFR methods achieve good performance in ordinary cases, they show limited resilience on real-world face images with severe degradation and varied poses or expressions (e.g., looking right, looking left, laughing). In this work, we propose a well-designed blind face restoration network with a generative facial prior. The proposed network mainly comprises an asymmetric codec and a StyleGAN2 prior network. In the asymmetric codec, we adopt a mixed multi-path residual block (MMRB) to gradually extract weak texture features from input images, which better preserves the original facial features and avoids over-hallucinating details. The MMRB is also plug-and-play and can be reused in other networks. Furthermore, thanks to the rich and diverse facial priors of the StyleGAN2 model, we adopt it as the primary generator in our method and design a novel self-supervised training strategy to fit the distribution closer to the target and flexibly restore natural, realistic facial details. Extensive experiments on synthetic and real-world datasets demonstrate that our model outperforms prior art on face restoration and face super-resolution tasks.
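The abstract only characterizes the MMRB at a high level (a plug-and-play, multi-path residual block for weak texture features). As a rough illustration of what such a block could look like, the PyTorch sketch below runs two parallel convolution paths with different receptive fields, mixes them with a 1x1 convolution, and adds a residual connection. The class name, channel width, kernel sizes, and activation are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class MMRB(nn.Module):
    """Illustrative mixed multi-path residual block (hypothetical layout).

    Assumption: two parallel convolution paths (3x3 and 5x5) capture weak
    textures at different receptive fields, a 1x1 convolution mixes them,
    and a residual connection preserves the original input features.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        self.path3 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.path5 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # 1x1 convolution fuses ("mixes") the concatenated paths back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mixed = self.fuse(torch.cat([self.path3(x), self.path5(x)], dim=1))
        return x + mixed  # residual connection keeps the original features


if __name__ == "__main__":
    block = MMRB(channels=64)
    feat = torch.randn(1, 64, 128, 128)  # dummy feature map
    print(block(feat).shape)             # torch.Size([1, 64, 128, 128])
```

Because the block maps a feature tensor to one of the same shape, it can be dropped into an existing encoder in place of a standard residual block, which is consistent with the plug-and-play property claimed in the abstract.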
8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. 
Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. 
IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. 
In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. 
Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. 
IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. 
In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Chen, Y., Tai, Y., Liu, X., Shen, C., Yang, J.: Fsrnet: End-to-end learning face super-resolution with facial priors. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2492–2501 (2018) (4) Sharma, S., Kumar, V.: 3d landmark-based face restoration for recognition using variational autoencoder and triplet loss. IET Biom. 10(1), 87–98 (2021) (5) Hu, X., Ren, W., Yang, J., Cao, X., Wipf, D., Menze, B., Tong, X., Zha, H.: Face restoration via plug-and-play 3d facial priors. IEEE Transactions on Pattern Analysis and Machine Intelligence 44(12), 8910–8926 (2022). https://doi.org/10.1109/TPAMI.2021.3123085 (6) Richardson, E., Alaluf, Y., Patashnik, O., Nitzan, Y., Azar, Y., Shapiro, S., Cohen-Or, D.: Encoding in style: a stylegan encoder for image-to-image translation. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2287–2296 (2021) (7) Chen, C., Li, X., Yang, L., Lin, X., Zhang, L., Wong, K.-Y.K.: Progressive semantic-aware style transformation for blind face restoration. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 11896–11905 (2021) (8) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. 
In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Sharma, S., Kumar, V.: 3d landmark-based face restoration for recognition using variational autoencoder and triplet loss. IET Biom. 10(1), 87–98 (2021) (5) Hu, X., Ren, W., Yang, J., Cao, X., Wipf, D., Menze, B., Tong, X., Zha, H.: Face restoration via plug-and-play 3d facial priors. IEEE Transactions on Pattern Analysis and Machine Intelligence 44(12), 8910–8926 (2022). https://doi.org/10.1109/TPAMI.2021.3123085 (6) Richardson, E., Alaluf, Y., Patashnik, O., Nitzan, Y., Azar, Y., Shapiro, S., Cohen-Or, D.: Encoding in style: a stylegan encoder for image-to-image translation. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2287–2296 (2021) (7) Chen, C., Li, X., Yang, L., Lin, X., Zhang, L., Wong, K.-Y.K.: Progressive semantic-aware style transformation for blind face restoration. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 11896–11905 (2021) (8) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Hu, X., Ren, W., Yang, J., Cao, X., Wipf, D., Menze, B., Tong, X., Zha, H.: Face restoration via plug-and-play 3d facial priors. IEEE Transactions on Pattern Analysis and Machine Intelligence 44(12), 8910–8926 (2022). 
https://doi.org/10.1109/TPAMI.2021.3123085 (6) Richardson, E., Alaluf, Y., Patashnik, O., Nitzan, Y., Azar, Y., Shapiro, S., Cohen-Or, D.: Encoding in style: a stylegan encoder for image-to-image translation. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2287–2296 (2021) (7) Chen, C., Li, X., Yang, L., Lin, X., Zhang, L., Wong, K.-Y.K.: Progressive semantic-aware style transformation for blind face restoration. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 11896–11905 (2021) (8) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 
9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Richardson, E., Alaluf, Y., Patashnik, O., Nitzan, Y., Azar, Y., Shapiro, S., Cohen-Or, D.: Encoding in style: a stylegan encoder for image-to-image translation. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2287–2296 (2021) (7) Chen, C., Li, X., Yang, L., Lin, X., Zhang, L., Wong, K.-Y.K.: Progressive semantic-aware style transformation for blind face restoration. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 11896–11905 (2021) (8) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 
(CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. 
arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Chen, C., Li, X., Yang, L., Lin, X., Zhang, L., Wong, K.-Y.K.: Progressive semantic-aware style transformation for blind face restoration. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 11896–11905 (2021) (8) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. 
Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. 
Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. 
In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 
9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020)
(CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. 
arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Chen, C., Li, X., Yang, L., Lin, X., Zhang, L., Wong, K.-Y.K.: Progressive semantic-aware style transformation for blind face restoration. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 11896–11905 (2021) (8) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. 
Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. 
IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. 
In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. 
5203–5212 (2020) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 
9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. 
Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Hu, X., Ren, W., Yang, J., Cao, X., Wipf, D., Menze, B., Tong, X., Zha, H.: Face restoration via plug-and-play 3d facial priors. IEEE Transactions on Pattern Analysis and Machine Intelligence 44(12), 8910–8926 (2022). https://doi.org/10.1109/TPAMI.2021.3123085 (6) Richardson, E., Alaluf, Y., Patashnik, O., Nitzan, Y., Azar, Y., Shapiro, S., Cohen-Or, D.: Encoding in style: a stylegan encoder for image-to-image translation. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2287–2296 (2021) (7) Chen, C., Li, X., Yang, L., Lin, X., Zhang, L., Wong, K.-Y.K.: Progressive semantic-aware style transformation for blind face restoration. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 11896–11905 (2021) (8) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 
55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Richardson, E., Alaluf, Y., Patashnik, O., Nitzan, Y., Azar, Y., Shapiro, S., Cohen-Or, D.: Encoding in style: a stylegan encoder for image-to-image translation. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2287–2296 (2021) (7) Chen, C., Li, X., Yang, L., Lin, X., Zhang, L., Wong, K.-Y.K.: Progressive semantic-aware style transformation for blind face restoration. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
11896–11905 (2021) (8) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Chen, C., Li, X., Yang, L., Lin, X., Zhang, L., Wong, K.-Y.K.: Progressive semantic-aware style transformation for blind face restoration. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 11896–11905 (2021) (8) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. 
Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Bulat, A., Tzimiropoulos, G.: Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 109–117 (2018) (9) Li, X., Chen, C., Zhou, S., Lin, X., Zuo, W., Zhang, L.: Blind face restoration via deep multi-scale component dictionaries. In: Proc. Comput. Vis. (ECCV), pp. 399–415 (2020). Springer (10) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. 
In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020)
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 
12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 
1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. 
In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. 
Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Li, X., Zhang, S., Zhou, S., Zhang, L., Zuo, W.: Learning dual memory dictionaries for blind face restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) (11) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 
1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. 
In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 
9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 
1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. 
(ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, Z., Zhang, J., Chen, R., Wang, W., Luo, P.: Restoreformer: High-quality blind face restoration from undegraded key-value pairs. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17512–17521 (2022) (12) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. 
(AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 4401–4410 (2019) (13) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. 
In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. 
Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 
40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 
40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 
55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. 
In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. 
Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. 
In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: Proc. Eur. Conf. Comput. Vis. (ECCV) (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. 
arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. 
arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 
40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 
1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. 
(ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
8110–8119 (2020) (14) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Menon, S., Damian, A., Hu, S., Ravi, N., Rudin, C.: Pulse: Self-supervised photo upsampling via latent space exploration of generative models. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2437–2445 (2020) (15) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. 
ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Li, Y., Zhang, H., Shan, Y.: Towards real-world blind face restoration with generative facial prior. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 9168–9178 (2021) (16) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 
34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. 
In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. 
Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. 
In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 
9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. 
Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 
1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. 
AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. 
Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 
(CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. 
Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. 
(ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 
9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 
12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 
(ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. 
AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. 
Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 
(CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. 
Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Yang, T., Ren, P., Xie, X., Zhang, L.: Gan prior embedded network for blind face restoration in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 672–681 (2021) (17) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, J., Shi, W., Chen, K., Fu, L., Dong, C.: Gcfsr: a generative and controllable face super resolution method without facial and gan priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. 
IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 
20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 
20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1889–1898 (2022) (18) Wang, Y., Hu, Y., Zhang, J.: Panini-net: Gan prior based degradation-aware feature interpolation for face restoration. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2022) (19) Mo, S., Cho, M., Shin, J.: Freeze the discriminator: a simple baseline for fine-tuning gans. In: CVPR AI for Content Creation Workshop (2020) (20) Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022) (21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp.
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. 
Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. 
In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. 
Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
(2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. 
Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. 
(AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. 
ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. 
IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. 
Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. 
IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 1–19 (2022)
(21) Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021)
(22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019)
(23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020)
(24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020)
(25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022)
(26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022)
(27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020)
(28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022)
(29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020)
(30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021)
(31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021)
(32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015)
(33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017)
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016)
(35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018)
(36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018)
(37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015)
(38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021)
(39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017)
(40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018)
(41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016)
(42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICLR) (2015)
(43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017)
(44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
(45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1905–1914 (2021)
(46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015)
(47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition (2008)
(48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010)
(49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014)
(50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (ICLR) (2014)
(51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012)
(52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023)
(53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018)
(54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017)
(55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022)
(56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. 
IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. 
Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. 
Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Jiang, J., Wang, C., Liu, X., Ma, J.: Deep learning-based face super-resolution: A survey. ACM Comput. Surv. 55(1), 1–36 (2021) (22) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 
9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. 
In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. 
Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Xin, J., Wang, N., Gao, X., Li, J.: Residual attribute attention network for face image super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9054–9061 (2019) (23) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Xin, J., Wang, N., Jiang, X., Li, J., Gao, X., Li, Z.: Facial attribute capsules for noise face super resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12476–12483 (2020) (24) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. 
Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. 
Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. 
Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 
20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Jiang, K., Wang, Z., Yi, P., Lu, T., Jiang, J., Xiong, Z.: Dual-path deep fusion network for face image hallucination. IEEE Trans. Neural Networks Learn. Syst. (2020) (25) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Wen, F., Liao, J.: Old photo restoration via deep latent space translation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2071–2087 (2022) (26) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. 
Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, S., Chan, K.C., Li, C., Loy, C.C.: Towards robust blind face restoration with codebook lookup transformer. arXiv preprint arXiv:2206.11253 (2022) (27) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. 
(ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. 
(ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 
517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. 
Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. 
In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. 
Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Yang, L., Wang, S., Ma, S., Gao, W., Liu, C., Wang, P., Ren, P.: Hifacegan: Face renovation via collaborative suppression and replenishment. In: Proc. ACM Multimedia Conf. (MM), pp. 1551–1560 (2020) (28) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 1–20 (2022) (29) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. 
In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. 
1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. 
Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. 
IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. 
Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. 
Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. 
(ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 
(CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. 
IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
- Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 12104–12114 (2020) (30) Huang, J., Liao, J., Kwong, S.: Unsupervised image-to-image translation via pre-trained stylegan2 network. IEEE Trans. Multimedia 24, 1435–1448 (2021) (31) Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICLR) (2015)
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 
20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. 
In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 
20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
(CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. 
IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. 
In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 
3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. 
In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. ACM Trans. Graph. 40(4), 1–14 (2021) (32) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9 (2015) (33) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. 
(ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017) (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 
129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
770–778 (2016) (35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. 
(ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 
3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 
In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. 
Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 
20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 
1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. 
In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 
20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proc. AAAI Conf. Artif. Intell. (AAAI) (2017)
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778 (2016)
(35) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018)
(36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018)
(37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015)
(38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021)
(39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017)
(40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018)
(41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016)
(42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICLR) (2015)
(43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017)
(44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
(45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1905–1914 (2021)
(46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015)
(47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition (2008)
(48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010)
(49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014)
(50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (ICLR) (2014)
(51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012)
(52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023)
(53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018)
(54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017)
(55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022)
(56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2472–2481 (2018) (36) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Li, J., Fang, F., Mei, K., Zhang, G.: Multi-scale residual network for image super-resolution. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 517–532 (2018) (37) He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. 
Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1026–1034 (2015) (38) Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021) (39) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (40) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018) (41) Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput.
Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Jin, H., Liao, S., Shao, L.: Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. Int. J. Comput. Vis. 129(12), 3174–3194 (2021)
- He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2961–2969 (2017) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp.
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 
20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8798–8807 (2018)
- Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016) (42) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICLR) (2015) (43) Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017) (44) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) (45) Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst.
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICLR) (2015)
- Huang, R., Zhang, S., Li, T., He, R.: Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2439–2448 (2017)
- Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Wang, X., Xie, L., Dong, C., Shan, Y.: Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pp. 1905–1914 (2021) (46) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. 
Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 
20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 
5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3730–3738 (2015) (47) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. 
IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: Workshop on Faces in’Real-Life’Images: Detection, Alignment, and Recognition (2008) (48) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010) (49) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. 
arXiv preprint arXiv:1411.7923 (2014) (50) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. (ICLR) (2014) (51) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012) (52) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. 
(NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3444–3453 (2023) (53) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018) (54) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017) (55) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: ECCV (2022) (56) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020) Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)
- Jain, V., Learned-Miller, E.: Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst (2010)
- Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (ICLR) (2014)
- Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012)
- Jo, B., Cho, D., Park, I.K., Hong, S.: Ifqa: Interpretable face quality assessment. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 3444–3453 (2023)
- Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 586–595 (2018)
- Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 30 (2017)
- Gu, Y., Wang, X., Xie, L., Dong, C., Li, G., Shan, Y., Cheng, M.-M.: Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In: Proc. Eur. Conf. Comput. Vis. (ECCV) (2022)
- Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5203–5212 (2020)